3:00 am
sophisticated fault crossings in the world right now in the bay area. some of the investment is geared towards creating more independence for our water facility. i realize the name of the game today is about coordination and cooperation, and when i speak about independence i'm not talking out of both sides of my mouth here, but being independent, steering some of our stuff to satellite phones; no offense, our verizon friends and our pg&e friends, but we're not going to count on you when things get bad. steering some of this investment to independent providers means we are not planning on depending on anybody. not only does that make it easier for me, keeping more things under my control, it makes things easier for them because they will have one less
3:01 am
person calling them. even though it's all about cooperation, sometimes independence comes into the conversation. the last thing i would touch on is that it's personal. by that i mean i know a lot of people in this room i can call and depend on if there's an emergency in san francisco. they know me, they know what i do, i see them all the time, we know what our roles are. we had a little mini exercise about a month ago with a minor pipe break in san francisco where we exercised these relationships. i won't talk about all the meet and greet opportunities to get to know each other, and i won't go into detail on all the areas, but we do work together in a lot of forums and it does come down to relationships in a lot of these areas of cooperation. >> thank you.
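the independence strategy described above, preferring normal carriers but keeping satellite gear under the agency's own control as a last resort, amounts to a simple priority-ordered fallback. a minimal python sketch, where the channel names and the availability checks are hypothetical, purely for illustration:

```python
def first_available(channels, is_up):
    """Return the first channel reported as usable, or None.

    channels: list of channel names in priority order.
    is_up: callable returning a truthy value when a channel works.
    """
    for ch in channels:
        if is_up(ch):
            return ch
    return None

# illustrative priority order: normal carriers first, then assets
# the agency controls itself (the satellite phones mentioned above).
priority = ["landline", "cellular", "radio", "satellite"]

# simulate a scenario where commercial networks are down after a
# major event; a real system would probe each link instead.
status = {"landline": False, "cellular": False, "radio": True, "satellite": True}
print(first_available(priority, status.get))  # prints: radio
```

the point of the ordering is exactly the speaker's: the last entries are the ones nobody else has to keep running for you.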
3:02 am
mr. angelus. >> good afternoon. at verizon wireless we believe our most important responsibility is to make sure our network is not only available, but reliable. so to address the question of resiliency, i would like to cover 3 important points about what we do day-to-day to make sure our network is available and ready for the public to use. no. 1 is back-up systems. just to give you a brief idea of how our network works, we have two major components: our cell sites and our switches. in all of our cell sites, we actually have batteries to account for power outages; that's 8 hours of back-up time on batteries, and in addition, most of our cell sites actually have back-up generators as well. those give us about 4 days of stand-by time, up to maybe 10 days, and then on top of that we have
3:03 am
vendors that we work with who are on stand-by to make sure that we can refuel our cell sites and also maintain them if our generators do go out of service. as for our switches, the switches are control centers that manage all our cell sites. we have about 300 cell sites per switch, and in the bay area we have about 4 switches. there we also have back-up power, so we have batteries, we have generators, not only one generator, sometimes 3 generators to account for the failure of one of them. then in these switches we have servers, fiber optics, all kinds of equipment to process calls from one point to another, and we have back-up systems on top of back-up systems to support that. pretty much every failure point is addressed in the design of our network. one thing that's important to
3:04 am
mention as well is that it's not only about making sure the network is available to handle calls, but also, do we have enough capacity to support surges of traffic? in cases of emergency, do we have the resources available for that? well, no network is perfect or can handle an unlimited amount of traffic. our network can handle about a 200 to 300 percent increase in traffic in emergency situations. the last thing i wanted to point out is our partnership with local government. the way our organization works, we have a regional approach to our operation, so that we can be closely tied in to all the regional offices and local governments that we work with, and so that we can establish relationships. in fact, we worked very closely with don's group in all the emergency situations this past summer; for all the wildfire scenarios in california we worked closely with his group to make sure we could
3:05 am
respond quickly in these situations. thank you. >> thank you. all right, our second question: what strategies do you have in place to avoid any long-term interruption of services to our residents and private sector partners, and have you tested them in a meaningful way to ensure their effectiveness? shall we start with you, mr. johnson? >> i'd be happy to start. in terms of testing for effectiveness, they are tested almost every day; within pg&e's service territory we probably have 80 electrical outages every day. we have opportunities to test our emergency centers on a regular basis, and in fact there's an emergency center open today just for some of the work that's happening throughout the service territory. so in terms of our effectiveness, it's something we do each and every day; unfortunately it's something we do every day, but at least it's
3:06 am
there. as for our infrastructure, we don't want to have outages, and we design so we minimize that damage, but we always know there will be things mother nature throws at us that we can't design around, whether it's an earthquake or a major storm. we get challenged with major storms throughout the service territory, so it's not unusual that a major storm comes by every year and takes out about a fifth of our base. we have service responses from our own teams well in excess of a hundred a year. in terms of preparedness, we have emergency pipe yards, we have emergency stand-by facilities, we have vendors who are on emergency stand-by, so those facilities and the material we might need, we have plenty of it, and it's stored throughout the entire service territory. we have 3 major supply service chains that are open and available to us 24/7 if we need
3:07 am
them. we also have contractor arrangements, maybe it's pipeline welding or electrical service work, lineman work, and we also have mutual service agreements, and pg&e participates in those. we have sent folks to hawaii, we have sent folks back east, and we have certainly sent folks to oregon and washington and idaho on a regular basis as we all in the west suffered through storms. so we have those activities in place, and then of course we do our normal emergency response activations on an on-going basis just to make sure everyone is trained and available and ready to go. in terms of the big one that we always talk about, we have an earthquake play book. the gas side and electric side have very specific actions to take, we have very strategic
3:08 am
actions to take, we have a hard copy, and hopefully we'll never need it, but if a big earthquake comes, and it can happen anywhere in the state, there are earthquake faults all over this place, we're ready to go. >> mr. boland. >> we take a little different role on this, seeing as we represent multiple disciplines of utilities. we are engaged with the california national guard in their exercises. we handle the utility operations center, which is wholly operated within the state's operations center. we do the exercises with cal ema, from golden guardian all the way down to the specific exercises. we interface with each and every one of our utilities and their exercise programs. we carry the best practices from one utility to the next to share that knowledge across disciplines. we then execute
3:09 am
decisions on the catastrophic plan for california, the one coming up here in the bay area, so all utilities understand the role that needs to be played and we can plug and play the utilities that are going to be called upon to support it from the private and municipal side, and this is an on-going, living process. we work with the local jurisdictions, the county jurisdictions always, the states north and south and the state operations center, and then support whatever the fema initiatives are moving out from that, across discipline lines. >> thank you. mr. brig. >> as an operator of a distribution system, the stories are probably very similar. we have emergencies every day, small, medium and even large, so the back-up systems, the screening, the ics structure, the spare
3:10 am
parts, the communication, those muscles are flexed quite often. the big one is the one we design for and spend a lot of our time and energy preparing for. the big one for us is the design earthquake, the maximum credible earthquake; on the san andreas that is basically an 8.0, a repeat of the 1906 earthquake. some of the components can be tested; we can simulate an earthquake on some of these facilities while they are in design to make sure they can withstand that level of earthquake. we're designing for an 8.0, and that's not what happened in japan in 2011. the earthquake that hit japan in 2011 was not what they were designing for, and all the assumptions went out the window, so that's food for thought. >> thank you. mr. angelus.
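the point about japan is worth quantifying: under the standard gutenberg-richter relation, radiated seismic energy grows by a factor of 10**1.5 (roughly 32x) per unit of moment magnitude, so a magnitude 9.0 event like the 2011 tohoku earthquake releases on the order of 32 times the energy of the 8.0 design basis discussed above. a short python sketch of that arithmetic; the function is illustrative, not anything the panelists described:

```python
def energy_ratio(m_event, m_design):
    """Gutenberg-Richter scaling: radiated seismic energy grows by a
    factor of 10**1.5 for each unit increase in moment magnitude."""
    return 10 ** (1.5 * (m_event - m_design))

# design basis above: ~M8.0, a repeat of 1906 on the san andreas.
# the 2011 tohoku, japan earthquake was approximately M9.0.
print(round(energy_ratio(9.0, 8.0), 1))  # prints: 31.6
```

which is why "designed for the maximum credible earthquake" still leaves the assumptions vulnerable, as the speaker notes.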
3:11 am
>> at verizon wireless we try, day in and day out, to avoid interruptions in service. we try to put all this in place, but in the event they do happen, we have those redundant systems i discussed earlier. we can route traffic from one site to another or from one switch to another, and we test regularly to make sure these cross-overs work. this is transparent to our customers; calls are being routed to another switch during these cross-over tests. we actually have a fleet of portable cell sites, cells that are on trucks or in trailers, that are available within a market so we can deploy them in cases of emergency. so if a cell site actually goes out of service, we can deploy a
3:12 am
temporary cell site. we have these systems available for our emergency departments too, so if they need coverage in certain areas that are not covered very well, they can actually use those systems as well. so i did talk about the fail-over cases. i think this is the key portion of our industry, for our company: we want to make sure that we can take into account these big situations when they occur and major interruptions happen in particular areas. >> thank you. okay, our last question: have you established standards for resilience, in cooperation with other lifeline providers, for how systems should perform in an earthquake? >> i'll go ahead and talk
3:13 am
about pg&e. i would say first off we've designed our own standards for what should happen in an earthquake or any other major emergency. our electric system is designed to worry about trees and wind and rain, which is what we see the most of and tends to be the most damaging, but we have our own standards and our own expectations in terms of what our system should be able to withstand. as david pointed out, the risk is that an event will occur that is greater than what you have prepared for. that's always a possibility. in terms of working with others, i think the important part is to make sure you understand what everyone is relying on you for. so it really comes back to an issue of priorities: what's going to come back first, what's going to come back second. for pg&e we always worry about bringing
3:14 am
electrical generation back, and those decisions are made in conjunction with our gas and electric together. most of the power plants in california are fed natural gas; you need to get the gas back up before you get the power plant back up if you have damage to both. we do have standards for all that activity in terms of our engineering design. the trick is for us to sit out there and talk with all the different regional areas, including san francisco, and make sure we understand how we're going to work together in the event we have an event that takes our services out or is greater than what we're actually expecting, and that's the challenge for all of us, all the service providers: working together to figure out how to make that happen. >> mr. boland. >> this is where we fit into that link. we represent the utilities that protect and build the resilience into the infrastructure. we fill a gap, which is the
3:15 am
relationships, distant and local relationships, crossing boundaries between the multiple disciplines in the utilities. we are able to cross those lines in the counties and step up to state operations so everybody is operating with a common operating picture, so everybody understands what's available not only in their jurisdiction, but what kind of resources we can bring to bear, short and long-term, how distant those are, what the qualifications are. we have master resource catalogs designed just like firescope and cal fire, in which we have built strike teams from our utilities, strike teams from water companies. they simply make a call and tell us we need 10, 12, 15, and it's our obligation to put that together and get it to them. they are worried about the incident in their jurisdiction, which they have to correct. it's our responsibility to reach beyond those borders, as their extension, to bring in the reserves that they need to maintain that continuity of
3:16 am
operation, and then we function through the state utility operations center and the state operations center to make sure that we have that kind of access and that kind of assistance. we need caltrans, we're going to need chp, we're going to need cal fire, we need dwr; they are invested in restoring their critical infrastructure, and it's our responsibility to reach across those lines to get that kind of access to keep that kind of restoration underway. >> thank you. mr. brig. >> in terms of establishing standards for resilience, absolutely, we have done that. again, as i mentioned earlier, to get our customers to fund all these capital projects we drew up a contract with them: this is what you're getting and this is what you're going to pay for. that had to be well defined, as engineers need to know what to do and what to design for. establishing those standards, what we call levels of service,
3:17 am
there are levels of service for threats of terrorism and also the seismic one, which is essentially an 8.0 event on the san andreas fault. we're working with other city departments so they know what to expect from us. it's been an education process. as we started down the road i think there was an expectation that all water and sewer was going to be in operation in san francisco after an earthquake. that probably is not going to happen. it's a little bit different having several blocks of your population out of water versus out of electricity or gas or cell phone service. it's a little bit different level of emergency. after an earthquake, what we're designing for is to have the high level fire system more or less immediately. there may be homes, individual service connections, which could be out of water for quite some time,
3:18 am
and that's where my utility has to interface with other departments to make sure we're getting water to people through humanitarian stations and the red cross; mutual aid is a huge part of this with our federal and state partners. but those hand-off points after a major event, and educating ourselves about what we're doing and not doing, is a big part of the lifeline process that naomi is running, and it's been very, very helpful. >> thank you. and mr. angelus. >> in terms of standards, similar to pg&e, we have established our own internal standards on how resilient our network is. it's tied to our quality of service, how we design our capacity, and also how we perform. we do these failure tests to make sure the network can withstand the additional traffic being
3:19 am
transferred from one portion to another. we have an earthquake strategy binder, which is available online for our employees and can be accessed; it's available in all of our switches. these are step-by-step directions for what happens when the big one occurs. we also practice this in drills; we have drills within the company or within the region to make sure everybody is on the same page when this situation does occur. now, as far as our support of other agencies, you know, our first priority of course when this thing happens is to make sure our network is running. but we also have a number of hotline numbers available for the police department or for the local agencies to contact us if they need some assistance from us, in case some of their own systems do go down, and we have our own infrastructure to support them as well. >> all right, thank you. i think this is an opportunity for us to open it up for questions and answers. i think
3:20 am
we have some folks with microphones right over there; there's a gentleman. >> you talked a lot about network and grid resiliency. how do you guys approach your op center in the context of resiliency of operations in terms of something like this? >> would you mind repeating that one more time? >> you talked a lot about your grid and the resiliency. it's something we look at all the time: if the ship sinks, who is the back-up guy in charge? how do you guys approach that stuff? >> i'll go ahead and cover at least for pg&e. in terms of
3:21 am
our emergency centers and understanding what's happening there, we have our primary emergency center here in san francisco. we have on-call personnel for both gas and electric and our generation facilities; upwards of about 80 people each and every day have to be available, available to come in on any notice. we have back-up facilities that can operate out of walnut creek in many cases, and we also have a major back-up facility in san ramon where we can duplicate everything that we have in san francisco. in terms of back-up facilities, in terms of our ability to operate if something goes wrong, we can bring up our emergency centers, and that's for our corporate emergency center. pg&e actually operates over 70,000 square miles. we have 19 divisions and 55 districts, and every one of those districts has an emergency room,
3:22 am
or what's often referred to as a storm room, and any one of those can open up and handle any other location's facility, so we in essence have at least 55 sites we can go to and try to operate from. but certainly with respect to the command structure and how we operate, we want to be in the emergency center for a major event such as an earthquake or major storm; we have 3 facilities that can handle that fairly easily and several that can do duty with a couple days' notice. >> mr. boland. >> we are very fortunate, we are fully embedded with the state operations center. should the state decide to close the operations center and relocate, we will follow very closely. we also have 10 virtual centers to operate out of to support the state operation and support the utility industries if in fact that's called upon. >> mr. brig. >> a similar response to
3:23 am
pg&e in terms of back-up centers. the only thing i would add again is the human element. we make sure all the knowledge does not reside in one person. we have a lot of bridges in the area, and not all our employees live in san francisco, so it may not be possible to get some of our senior managers or key employees here quickly, within a day, within two days, and it's a constant challenge to make sure we have documentation and broad training so that whoever does arrive at the eoc, they are in charge until someone else gets there, and that could be a long time. >> thank you. mr. angelus. >> for verizon wireless in northern california we have two separate offices, one in walnut creek and one in folsom near sacramento. we have another redundancy center in
3:24 am
texas. we have that redundancy all across our infrastructure and also with our teams. >> thank you. we have one question in the back. >> yes, my question is regarding your ability to bring in repair equipment or crews to make repairs, or back-up equipment in case your infrastructure is broken. i know they have that capability, but in this area, you know, many roads and bridges would be damaged. do you have your own internal aviation capability or lift capability to bring in that repair equipment and those crews, or is that something that you would be looking to other organizations to provide? >> start with mr. johnson. >> yeah, we have some limited capability in terms of aircraft and on-site helicopters in our outlying areas, but in terms of a major event in the san francisco bay area, we would be
3:25 am
heavily focused on those folks who provide that service to us under contract. we do not have our own helicopters in oakland or san francisco itself, and as was already mentioned, san francisco is very difficult to get to in the case of a major event. it's either going to be getting people in or, obviously, getting across the bridges and getting material in. while we have the transport ability on the ground, we don't have a lot of aircraft capability. most of our facilities wouldn't come in via aircraft anyway. for the services we need, it's not typically getting the people, it's getting to the location where you need to do the work. while we can drop folks in via helicopter, and we do that on a pretty regular basis during fires and storms up in the mountains, it's going to be about getting the roads clear and getting the bridges open, getting access into the locations we need to get to.
3:26 am
in fact, we suffer through that in a lot of the big cities, even during commute times, if there's a minor emergency getting through, and we have a relationship with the city here in san francisco where the fire department and the first responders have reached out to us and will help us get there. i can't say we have that everywhere in our state. certainly in a major event, that would be our concern, our ability to get our work force into the bay area given the type of infrastructure we have here. >> thank you. mr. brig. >> i would add, we don't have aircraft that can lift heavy bulldozers, so we would be looking to the state for that. that would be a mutual aid call pretty quickly. >> verizon wireless would be the same. the most important thing is for us to get our resources to where they need to work, where they can repair or
3:27 am
reroute traffic where it needs to be. one of our main offices is located in the south bay; that way we don't have to worry about bridge access to get into the peninsula or san francisco if the situation occurs, but if the situation does occur, we're going to need some help from other agencies to make it happen for us. >> all right, thank you. and we have another question. >> you guys deal with unexpected circumstances all the time, weather circumstances, things that are completely unexpected. i would be really interested in, as a result of your latest hot washes on an event, whatever that event might have been, what was the big aha thing that you learned from that event? >> well, i don't know if it was an aha; unfortunately, as i mentioned, we get the opportunity to practice on a
3:28 am
pretty regular basis. i would say the issues that typically come up in a major event really are two-fold. one is communication, communications with everybody; in a major event we will have a thousand to two thousand people out working, and understanding where every crew is at and every product on every job is a huge challenge for us. we're concentrating on bringing every customer back on and at the same time not sending a crew to a location where we may already have somebody. it ties directly to the comment i made earlier: getting folks to the location is a real challenge. this happens certainly in storms up in the mountainous areas; facilities are closed, trees are down, bridges are blocked, areas flood. so getting folks to those locations can be very difficult; we end up doing a lot of hand walk-ins, a lot of
3:29 am
helicopters in with crews of 30 or 40 folks, and communicating with them where there isn't cell coverage, even if it did work, is a big challenge. the things we focus on the majority of the time are how do we improve our communications, what did we run into this time that we didn't have in the past, how do we get through that, and how can we improve getting folks to the facility and making sure we know where everybody is at at all times. >> thank you. mr. boland. >> one of the major challenges that we have incurred all the way from 2003 till now is credentialing or badging of utility emergency responder personnel trying to gain access into a secure zone that needs their services. the pre-positioning of the heavy equipment sometimes falls under

November 3, 2012 3:00am-3:30am PDT

Network SFGTV2